23 research outputs found
The AeroSonicDB (YPAD-0523) Dataset for Acoustic Detection and Classification of Aircraft
The time and expense required to collect and label audio data has been a
prohibitive factor in the availability of domain specific audio datasets. As
the predictive specificity of a classifier depends on the specificity of the
labels it is trained on, it follows that finely-labelled datasets are crucial
for advances in machine learning. Aiming to stimulate progress in the field of
machine listening, this paper introduces AeroSonicDB (YPAD-0523), a dataset of
low-flying aircraft sounds for training acoustic detection and classification
systems. This paper describes the method of exploiting ADS-B radio
transmissions to passively collect and label audio samples, provides a summary
of the collated dataset, presents baseline results from three binary
classification models, and discusses the limitations of the current dataset
and its future potential. The dataset contains 625 aircraft recordings ranging
in event duration from 18 to 60 seconds, for a total of 8.87 hours of aircraft
audio. These 625 samples feature 301 unique aircraft, each of which is
supplied with 14 supplementary (non-acoustic) labels to describe the aircraft.
The dataset also contains 3.52 hours of ambient background audio ("silence"),
as a means to distinguish aircraft noise from other local environmental noises.
Additionally, 6 hours of urban soundscape recordings (with aircraft
annotations) are included as an ancillary method for evaluating model
performance, and to provide a testing ground for real-time applications.
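The ADS-B-triggered collection method described above can be sketched roughly as follows. The message fields, thresholds, and function names here are illustrative assumptions, not the authors' actual implementation:

```python
from dataclasses import dataclass

# Hypothetical decoded ADS-B position report; a real feed (e.g. from a
# 1090 MHz receiver) would supply these fields after message decoding.
@dataclass
class AdsbReport:
    icao24: str         # unique 24-bit airframe address, used to label the clip
    altitude_ft: float  # reported barometric altitude
    distance_km: float  # ground distance from the microphone

def should_record(report: AdsbReport,
                  max_altitude_ft: float = 10000.0,
                  max_distance_km: float = 3.0) -> bool:
    """Trigger an audio recording only for low, nearby aircraft."""
    return (report.altitude_ft <= max_altitude_ft
            and report.distance_km <= max_distance_km)

def label_for(report: AdsbReport) -> str:
    """The ICAO address links each clip to supplementary airframe metadata."""
    return f"aircraft_{report.icao24}"
```

The key idea is that the radio transmission does the labelling work: no human needs to listen to the audio to know which airframe produced it.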
A review of augmented reality applications for ship bridges
We present a state-of-the-art analysis of Augmented Reality (AR) applications for ship bridge operation. We compiled and reviewed what types of use cases were published, what types of maritime applications have been adapted to AR, how they were prototyped and evaluated, and what types of technology were used. We also reviewed the user interaction mechanisms, information display, and adaptation to maritime environmental conditions.
Our analysis shows that although there are many examples of AR applications in ship bridges, much work remains before these solutions can be suitably adapted to commercial settings. In addition, we argue there is a need to develop design requirements and regulations that can guide the safe development of AR.
Development of an augmented reality concept for icebreaker assistance and convoy operations
A vessel convoy is a complex and high-risk operation completed during icebreaking operations in the Arctic. Icebreaker navigators need to continuously communicate with their crew while monitoring information such as speed, heading, and distance between vessels in the convoy. This paper presents an augmented reality user interface concept, which aims to support navigators by improving oversight and safety during convoy operations. The concept demonstrates how augmented reality can help to realize a situated user interface that adapts to the user's physical and operational contexts. The concept was developed through a human-centered design process and tested through a virtual reality simulator in a usability study involving seven mariners. The results suggest that augmented reality has the potential to improve the safety of convoy operations by integrating distributed information with heads-up access to operation-critical information. However, the user interface concept is still novel, and further work is needed to develop the concept and safely integrate augmented reality into maritime operations.
Is Dispositional Self-Compassion Associated With Psychophysiological Flexibility Beyond Mindfulness? An Exploratory Pilot Study
Background: Dispositional mindfulness and self-compassion are shown to associate with less self-reported emotional distress. However, previous studies have indicated that dispositional self-compassion may be an even more important buffer against such distress than dispositional mindfulness. To our knowledge, no study has yet disentangled the relationship between dispositional self-compassion and mindfulness and level of psychophysiological flexibility as measured with vagally mediated heart rate variability (vmHRV). The aim was thus to provide a first exploratory effort to expand previous research relying on self-report measures by including a psychophysiological measure indicative of emotional stress reactivity.
Methods: Fifty-three university students filled out the "Five Facet Mindfulness Questionnaire" (FFMQ) and the "Self-Compassion Scale" (SCS), and their heart rate was measured during a 5 min resting electrocardiogram. Linear hierarchical regression analyses were conducted to examine the common and unique variance explained by the total scores of the FFMQ and the SCS on level of resting vmHRV.
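The hierarchical regression logic described here (entering FFMQ first, then adding SCS and examining the increment in explained variance) can be sketched with ordinary least squares. The data below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 53                                    # sample size in the study
ffmq = rng.normal(size=n)                 # mindfulness total score (synthetic)
scs = 0.5 * ffmq + rng.normal(size=n)     # self-compassion, correlated with FFMQ
vmhrv = 0.6 * scs + rng.normal(size=n)    # outcome driven by SCS in this toy setup

# Step 1: FFMQ alone; Step 2: FFMQ + SCS. Delta R^2 is the variance
# uniquely explained by SCS over and above mindfulness.
r2_step1 = r_squared(ffmq.reshape(-1, 1), vmhrv)
r2_step2 = r_squared(np.column_stack([ffmq, scs]), vmhrv)
delta_r2 = r2_step2 - r2_step1
```

In the study's terms, the reported "SCS uniquely explained 7%" corresponds to this delta-R^2 increment at step 2.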
Results: Higher SCS total scores were significantly associated with higher levels of vmHRV, also when controlling for the FFMQ total scores. The SCS uniquely explained 7% of the variance in vmHRV. The FFMQ total scores did not associate with level of vmHRV.
Conclusion: These results offer preliminary support that dispositional self-compassion associates with better psychophysiological regulation of emotional arousal above and beyond mindfulness.
LSST: from Science Drivers to Reference Design and Anticipated Data Products
(Abridged) We describe here the most ambitious survey currently planned in
the optical, the Large Synoptic Survey Telescope (LSST). A vast array of
science will be enabled by a single wide-deep-fast sky survey, and LSST will
have unique survey capability in the faint time domain. The LSST design is
driven by four main science themes: probing dark energy and dark matter, taking
an inventory of the Solar System, exploring the transient optical sky, and
mapping the Milky Way. LSST will be a wide-field ground-based system sited at
Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m
effective) primary mirror, a 9.6 deg² field of view, and a 3.2 Gigapixel
camera. The standard observing sequence will consist of pairs of 15-second
exposures in a given field, with two such visits in each pointing in a given
night. With these repeats, the LSST system is capable of imaging about 10,000
square degrees of sky in a single filter in three nights. The typical 5σ
point-source depth in a single visit in r will be ∼24.5 (AB). The
project is in the construction phase and will begin regular survey operations
by 2022. The survey area will be contained within 30,000 deg² with
δ < +34.5°, and will be imaged multiple times in six bands, ugrizy,
covering the wavelength range 320--1050 nm. About 90% of the observing time
will be devoted to a deep-wide-fast survey mode which will uniformly observe an
18,000 deg² region about 800 times (summed over all six bands) during the
anticipated 10 years of operations, and yield a coadded map to r ∼ 27.5. The
remaining 10% of the observing time will be allocated to projects such as a
Very Deep and Fast time domain survey. The goal is to make LSST data products,
including a relational database of about 32 trillion observations of 40 billion
objects, available to the public and scientists around the world.
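A back-of-envelope check of the cadence figures quoted in this abstract can be done with simple arithmetic. This is a rough sketch that ignores pointing overlap, slew and readout overheads, weather, and dithering:

```python
# Figures quoted in the abstract
fov_deg2 = 9.6            # field of view per pointing, deg^2
main_area_deg2 = 18000    # deep-wide-fast survey footprint, deg^2
visits_per_point = 800    # visits per sky position, summed over six bands

# Rough total number of visits needed to cover the footprint 800 times,
# treating each visit as covering exactly one non-overlapping field.
total_visits = main_area_deg2 * visits_per_point / fov_deg2

# Each visit is a pair of 15-second exposures.
exposure_s_per_visit = 2 * 15
total_exposure_hours = total_visits * exposure_s_per_visit / 3600
```

This works out to 1.5 million visits and 12,500 hours of open-shutter exposure over the ten-year survey, which gives a feel for why the data products run to trillions of observations.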
CMB-S4
We describe the stage 4 cosmic microwave background ground-based experiment, CMB-S4.
Environmental sound classification on microcontrollers using Convolutional Neural Networks
Noise is a growing problem in urban areas and, according to the WHO, is the second environmental cause of health problems in Europe. Noise monitoring using Wireless Sensor Networks is being applied in order to understand and help mitigate these noise problems. It is desirable that these sensor systems, in addition to logging the sound level, can indicate what the likely sound source is. However, transmitting audio to a cloud system for classification is energy-intensive and may cause privacy issues. It is also critical for widespread adoption and dense sensor coverage that individual sensor nodes are low-cost. Therefore we propose to perform the noise classification on the sensor node, using a low-cost microcontroller.
Several Convolutional Neural Networks were designed for the STM32L476 low-power microcontroller using the Keras deep-learning framework, and deployed using the vendor-provided X-CUBE-AI inference engine. The resource budget for the model was set at a maximum of 50% utilization of CPU, RAM, and FLASH. Ten model variations were evaluated on the Environmental Sound Classification task using the standard Urbansound8k dataset.
The best models used Depthwise-Separable convolutions with striding for downsampling, and were able to reach 70.9% mean 10-fold accuracy while consuming only 20% CPU. To our knowledge, this is the highest reported performance on Urbansound8k using a microcontroller. One of the models was also tested on a microcontroller development device, demonstrating the classification of environmental sounds in real-time.
These results indicate that it is computationally feasible to classify environmental sound on low-power microcontrollers. Further development should make it possible to create wireless sensor networks for noise monitoring with on-edge noise source classification.
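A rough illustration of why depthwise-separable convolutions suit such a tight resource budget is a parameter-count comparison between a standard convolution layer and its depthwise-separable equivalent. This is a sketch of the general technique, not the paper's exact architecture; bias and batch-norm parameters are ignored:

```python
def conv2d_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard KxK 2D convolution layer."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """A KxK depthwise convolution followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Example layer: 24 -> 48 channels with a 3x3 kernel (illustrative sizes)
standard = conv2d_params(24, 48, 3)                 # 24*48*9  = 10368 weights
separable = depthwise_separable_params(24, 48, 3)   # 216+1152 = 1368 weights
reduction = standard / separable                    # ~7.6x fewer weights
```

The same factorization also cuts multiply-accumulate operations by a similar factor, which is what leaves CPU headroom on a microcontroller-class device.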
2004: Annual Disease Nursery Report
Unpublished; not peer reviewed.